AI Development


6 Graphs That Show Where the U.S. Leads China on AI--and Where It Doesn't

TIME - Tech

Two important things happened on January 20, 2025. In Washington, D.C., Donald Trump was inaugurated as President of the United States. In Hangzhou, China, a little-known Chinese firm called DeepSeek released R1, an AI model that industry watchers called a "Sputnik moment" for the country's AI industry. "Whether we like it or not, we're suddenly engaged in a fast-paced competition to build and define this groundbreaking technology that will determine so much about the future of civilization," said Trump later that year, as he announced his administration's AI action plan, which was titled "Winning the Race." There are many interpretations of what AI companies and their governments are racing towards, says AI policy researcher Lennart Heim: to deploy AI systems in the economy, to build robots, to create human-like artificial general intelligence.


The Gender Code: Gendering the Global Governance of Artificial Intelligence

Cupac, Jelena

arXiv.org Artificial Intelligence

This paper examines how international AI governance frameworks address gender issues and gender-based harms. The analysis covers binding regulations, such as the EU AI Act; soft law instruments, like the UNESCO Recommendations on AI Ethics; and global initiatives, such as the Global Partnership on AI (GPAI). These instruments reveal emerging trends, including the integration of gender concerns into broader human rights frameworks, a shift toward explicit gender-related provisions, and a growing emphasis on inclusivity and diversity. Yet critical gaps persist, including inconsistent treatment of gender across governance documents, limited engagement with intersectionality, and a lack of robust enforcement mechanisms. This paper therefore argues that effective AI governance must be intersectional, enforceable, and inclusive. This is key to moving beyond tokenism toward meaningful equity and to preventing the reinforcement of existing inequalities. The study contributes to ethical AI debates by highlighting the importance of gender-sensitive governance in building a just technological future.


The Strange Disappearance of an Anti-AI Activist

The Atlantic - Technology

Sam Kirchner wants to save the world from artificial superintelligence. He's been missing for two weeks. Before Sam Kirchner vanished, before the San Francisco Police Department began to warn that he could be armed and dangerous, before OpenAI locked down its offices over the potential threat, those who encountered him saw him as an ordinary, if ardent, activist. Phoebe Thomas Sorgen met Kirchner a few months ago at Travis Air Force Base, northeast of San Francisco, at a protest against immigration policy and U.S. military aid to Israel. Sorgen, a longtime activist whose first protests were against the Vietnam War, was going to block an entrance to the base with six other older women. Kirchner, 27 years old, was there with a couple of other members of a new group called Stop AI, and they all agreed to go along to record video on their phones in case of a confrontation with the police.


Amazon Workers Issue Warning About Company's 'All-Costs-Justified' Approach to AI Development

WIRED

Amazon Employees for Climate Justice says that over 1,000 workers have signed a petition raising "serious concerns" about the company's "aggressive rollout" of artificial intelligence tools. Over 1,000 Amazon employees have anonymously signed an open letter warning that the company's allegedly "all-costs-justified, warp-speed approach to AI development" could cause "staggering damage to democracy, to our jobs, and to the earth," an internal advocacy group announced on Wednesday. Four members of Amazon Employees for Climate Justice tell WIRED that they began asking workers to sign the letter last month. After reaching their initial goal, the group published on Wednesday the job titles of the Amazon employees who signed and disclosed that more than 2,400 supporters from other organizations, including Google and Apple, have also joined in. Backers inside Amazon include high-ranking engineers, senior product leaders, marketing managers, and warehouse staff spanning many divisions of the company.


Copyright Detection in Large Language Models: An Ethical Approach to Generative AI Development

Szczecina, David, Gaffori, Senan, Li, Edmond

arXiv.org Artificial Intelligence

The widespread use of Large Language Models (LLMs) raises critical concerns regarding the unauthorized inclusion of copyrighted content in training data. Existing detection frameworks, such as DE-COP, are computationally intensive and largely inaccessible to independent creators. As legal scrutiny increases, there is a pressing need for a scalable, transparent, and user-friendly solution. This paper introduces an open-source copyright detection platform that enables content creators to verify whether their work was used in LLM training datasets. Our approach enhances existing methodologies by facilitating ease of use, improving similarity detection, optimizing dataset validation, and reducing computational overhead by 10-30% through efficient API calls. With an intuitive user interface and scalable backend, this framework increases transparency in AI development and ethical compliance, laying the foundation for further research in responsible AI development and copyright enforcement.


AI Workers, Geopolitics, and Algorithmic Collective Action

Reis, Sydney

arXiv.org Artificial Intelligence

According to the theory of International Political Economy (IPE), states are often incentivized to rely on rather than constrain powerful corporations. For this reason, IPE provides a useful lens to explain why efforts to govern Artificial Intelligence (AI) at the international and national levels have thus far been developed, applied, and enforced unevenly. Building on recent work that explores how AI companies engage in geopolitics, this position paper argues that some AI workers can be considered actors of geopolitics. It makes the timely case that governance alone cannot ensure responsible, ethical, or robust AI development and use, and that greater attention should be paid to bottom-up interventions at the site of AI development. AI workers themselves should be situated as individual agents of change, especially when considering their potential to foster Algorithmic Collective Action (ACA). Drawing on methods of Participatory Design (PD), this paper proposes engaging AI workers as sources of knowledge, relative power, and intentionality to encourage more responsible and just AI development and to create the conditions that can facilitate ACA.


The UN's AI warnings grow louder

TIME - Tech

Welcome back to In the Loop, our new twice-weekly newsletter about AI. It was a busy week for our team: Tharin Pillay was on site during the UN General Assembly in New York, while Harry Booth and Nikita Ostrovsky were at the "All In AI" event in Montreal. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? The United Nations General Assembly met this week in New York. While the assembly members spent much of their time on the crises in Palestine and Sudan, they also devoted a good chunk to AI.


The Collaborations among Healthcare Systems, Research Institutions, and Industry on Artificial Intelligence Research and Development

Ye, Jiancheng, Ma, Michelle, Abuhashish, Malak

arXiv.org Artificial Intelligence

Objectives: The integration of Artificial Intelligence (AI) in healthcare promises to revolutionize patient care, diagnostics, and treatment protocols. Collaborative efforts among healthcare systems, research institutions, and industry are pivotal to leveraging AI's full potential. This study aims to characterize collaborative networks and stakeholders in AI healthcare initiatives, identify challenges and opportunities within these collaborations, and elucidate priorities for future AI research and development. Methods: This study utilized data from the Chinese Society of Radiology and the Chinese Medical Imaging AI Innovation Alliance. A national cross-sectional survey was conducted in China (N = 5,142) across 31 provincial administrative regions, involving participants from three key groups: clinicians, institution professionals, and industry representatives. The survey explored diverse aspects including current AI usage in healthcare, collaboration dynamics, challenges encountered, and research and development priorities. Results: Findings reveal high interest in AI among clinicians, with a significant gap between interest and actual engagement in development activities. Despite the willingness to share data, progress is hindered by concerns about data privacy and security, and lack of clear industry standards and legal guidelines. Future development interests focus on lesion screening, disease diagnosis, and enhancing clinical workflows. Conclusion: This study highlights an enthusiastic yet cautious approach toward AI in healthcare, characterized by significant barriers that impede effective collaboration and implementation. Recommendations emphasize the need for AI-specific education and training, secure data-sharing frameworks, establishment of clear industry standards, and formation of dedicated AI research departments.


Towards Experience-Centered AI: A Framework for Integrating Lived Experience in Design and Development

Gautam, Sanjana, Chandra, Mohit, De, Ankolika, Chakravorti, Tatiana, Malik, Girik, De Choudhury, Munmun

arXiv.org Artificial Intelligence

Lived experiences fundamentally shape how individuals interact with AI systems, influencing perceptions of safety, trust, and usability. While prior research has focused on developing techniques to emulate human preferences, and proposed taxonomies to categorize risks (such as psychological harms and algorithmic biases), these efforts have provided limited systematic understanding of lived human experiences or actionable strategies for embedding them meaningfully into the AI development life-cycle. This work proposes a framework for meaningfully integrating lived experience into the design and evaluation of AI systems. We synthesize interdisciplinary literature across lived experience philosophy, human-centered design, and human-AI interaction, arguing that centering lived experience can lead to models that more accurately reflect the retrospective, emotional, and contextual dimensions of human cognition. Drawing from a wide body of work across psychology, education, healthcare, and social policy, we present a targeted taxonomy of lived experiences with specific applicability to AI systems. To ground our framework, we examine three application domains--(i) education, (ii) healthcare, and (iii) cultural alignment--illustrating how lived experience informs user goals, system expectations, and ethical considerations in each context. We further incorporate insights from AI system operators and human-AI partnerships to highlight challenges in responsibility allocation, mental model calibration, and long-term system adaptation. We conclude with actionable recommendations for developing experience-centered AI systems that are not only technically robust but also empathetic, context-aware, and aligned with human realities. This work offers a foundation for future research that bridges technical development with the lived experiences of those impacted by AI systems.


Five ways that AI is learning to improve itself

MIT Technology Review

By the same token, Clune says, automating AI research and development could have enormous upsides. On our own, we humans might not be able to think up the innovations and improvements that will allow AI to one day tackle prodigious problems like cancer and climate change. For now, human ingenuity is still the primary engine of AI advancement; otherwise, Meta would hardly have made such exorbitant offers to attract researchers to its superintelligence lab. But AI is already contributing to its own development, and it's set to take even more of a role in the years to come. Here are five ways that AI is making itself better.